Software Defined Networking Concepts
ion allowing the control of low-level behavior in switching devices. Such an approach could turn out to be beneficial, since it would simplify the deployment of new, more efficient schemes for low-level switch operations [20]. On the other hand, while moving all control operations to a logically centralized controller has the advantage of easier network management, it can also raise scalability issues if the physical implementation of the controller is also centralized. Therefore, it might be beneficial to retain some of the logic in the switches. For instance, in the case of DevoFlow [21], which is a modification of the OpenFlow model, packet flows are divided into two categories: small ("mice") flows handled directly by the switches and large ("elephant") flows requiring the intervention of the controller. Similarly, in the DIFANE [22] approach intermediate switches are used for storing the necessary rules and the controller is relegated to the simpler task of partitioning the rules over the switches. Another issue with SDN switches is that their forwarding rules are more complex than those of conventional networks: they use wildcards and consider multiple fields of the packet, such as source and destination addresses, ports, application, etc. As a result, the switching hardware cannot easily cope with the management of packets and flows. For the forwarding operation to be fast, ASICs using ternary content-addressable memory (TCAM) are required. Unfortunately, such specialized hardware is expensive and power-hungry, and as a result only a limited number of forwarding entries for flow-based forwarding schemes can be supported in each switch, hindering network scalability. 
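To see why multi-field wildcard rules strain commodity hardware, consider a minimal sketch of what a software (CPU-based) match must do: scan a priority-ordered rule list linearly, whereas a TCAM compares a packet against all entries in parallel in a single lookup cycle. The field names and rule layout below are illustrative, not taken from any real switch API.

```python
WILDCARD = None  # a None-valued field matches any packet value

rules = [  # highest priority first, as a TCAM would order its entries
    {"priority": 300, "match": {"dst_ip": "10.0.0.5", "dst_port": 80}, "action": "port:3"},
    {"priority": 200, "match": {"src_ip": "10.0.1.0", "dst_port": WILDCARD}, "action": "drop"},
    {"priority": 100, "match": {}, "action": "controller"},  # catch-all entry
]

def match(packet, rules):
    """Linear scan over priority-ordered wildcard rules; a TCAM performs
    all of these comparisons in hardware in one parallel lookup."""
    for rule in rules:
        if all(packet.get(field) == value
               for field, value in rule["match"].items()
               if value is not WILDCARD):
            return rule["action"]
    return "drop"  # unreachable here because of the catch-all entry

pkt = {"src_ip": "10.0.2.7", "dst_ip": "10.0.0.5", "dst_port": 80}
print(match(pkt, rules))  # -> port:3
```

The linear scan costs time proportional to the number of rules, which is exactly the cost a TCAM eliminates, at the price of expensive, power-hungry hardware and a limited entry count.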
A way to cope with this would be to introduce an assisting CPU in the switch or somewhere nearby to perform not only control plane but also data plane functionality, e.g., letting the CPU forward the "mice" flows [23], or to introduce new architectures that are more expressive and allow more packet-processing actions to be performed [24]. Hardware limitations are not restricted to fixed networks but extend to the wireless and mobile domains as well. The wireless data plane needs to be redesigned in order to offer more useful abstractions, similarly to what happened with the data plane of fixed networks. While the data plane abstractions offered by protocols like OpenFlow support the idea of decoupling the control from the data plane, they cannot be extended to the wireless and mobile field unless the underlying hardware (e.g., switches in backhaul cellular networks and wireless access points) starts providing equally sophisticated and useful abstractions [6]. Regardless of the way that SDN switches are implemented, it should be made clear that for the new paradigm to gain popularity, backwards compatibility is a very important factor. While pure SDN switches that completely lack integrated control exist, it is the hybrid approach (i.e., supporting SDN alongside traditional operation and protocols) that would probably be the most successful in these early stages of SDN [12]. The reason is that while the features of SDN present a compelling solution for many realistic scenarios, the infrastructure in most enterprise networks still follows the conventional approach. Therefore, an intermediate hybrid network form would probably ease the transition to SDN.
3.3.3 SDN Controllers
As already mentioned, one of the core ideas of the SDN philosophy is the existence of a network operating system placed between the network infrastructure and the application layer. 
This network operating system is responsible for coordinating and managing the resources of the whole network and for exposing an abstract, unified view of all components to the applications executed on top of it. This idea is analogous to the one followed in a typical computer system, where the operating system lies between the hardware and the user space and is responsible for managing the hardware resources and providing common services for user programs. Similarly, network administrators and developers are now presented with a homogeneous environment that is easier to program and configure, much as a typical application developer is. The logically centralized control and the generalized network abstraction it offers make the SDN model applicable to a wider range of applications and heterogeneous network technologies compared to the conventional networking paradigm. For instance, consider a heterogeneous environment composed of a fixed and a wireless network comprising a large number of network devices (routers, switches, wireless access points, middleboxes, etc.). In the traditional networking paradigm, each network device would require individual low-level configuration by the network administrator in order to operate properly. Moreover, since each device targets a different networking technology, it would have its own specific management and configuration requirements, meaning that extra effort would be required by the administrator to make the whole network operate as intended. On the other hand, with the logically centralized control of SDN, the administrator would not have to worry about low-level details. Instead, network management would be performed by defining a proper high-level policy, leaving the network operating system responsible for communicating with and configuring the operation of network devices. 
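The "network operating system" idea above can be sketched as a single high-level policy being compiled into per-device, low-level configuration. Everything here is hypothetical (device types, config syntax, the `compile_policy` helper); it only illustrates the abstraction, not any real controller's API.

```python
devices = [
    {"name": "sw1", "type": "switch"},
    {"name": "ap1", "type": "wireless_ap"},
    {"name": "rt1", "type": "router"},
]

def compile_policy(policy, devices):
    """The network OS visits every managed device and emits the low-level
    rule that device understands, so the administrator states the policy
    once instead of configuring each box in its own syntax."""
    configs = {}
    for dev in devices:
        if dev["type"] == "switch":
            configs[dev["name"]] = f"flow-rule: {policy['match']} -> {policy['action']}"
        elif dev["type"] == "wireless_ap":
            configs[dev["name"]] = f"acl: {policy['match']} {policy['action']}"
        elif dev["type"] == "router":
            configs[dev["name"]] = f"access-list deny {policy['match']}"
    return configs

# One high-level intent, e.g. "block Telnet network-wide":
policy = {"match": "tcp/23", "action": "drop"}
for name, cfg in compile_policy(policy, devices).items():
    print(name, "->", cfg)
```

The administrator's view stops at `policy`; the heterogeneous per-device output is the network operating system's responsibility, mirroring how a computer OS hides hardware differences behind common services.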
Having discussed the general concepts behind the SDN controller, the following subsections take a closer look at specific design decisions and implementation choices made at this core component that can prove critical for the overall performance and scalability of the network.
3.3.3.1 Centralization of control in SDN
As already discussed, the SDN architecture specifies that the network infrastructure is logically controlled by a central entity responsible for management and policy enforcement. However, it should be made clear that logically centralized control does not necessarily imply physical centralization. There have been various proposals for physically centralized controllers, such as NOX [19] and Maestro [25]. A physically centralized control design simplifies the controller implementation. All switches are controlled by the same physical entity, meaning that the network is not subject to consistency-related issues, with all applications seeing the same network state (which comes from the same controller). Despite its advantages, this approach suffers from the same weakness as all centralized systems: the controller acts as a single point of failure for the whole network. A way to overcome this is to connect multiple controllers to a switch, allowing a backup controller to take over in the event of a failure. In this case, all controllers need to have a consistent view of the network, otherwise applications might fail to operate properly. Moreover, the centralized approach can raise scalability concerns, since all network devices need to be managed by the same entity. One approach that further generalizes the idea of using multiple controllers over the network is to maintain a logically centralized but physically decentralized control plane. In this case, each controller is responsible for managing only one part of the network, but all controllers communicate and maintain a common network view. 
Therefore, applications view the controller as a single entity, while in reality control operations are performed by a distributed system. The advantage of this approach, apart from eliminating the single point of failure, is the increase in performance and scalability, since each individual controller component needs to manage only a part of the network. Some well-known controllers that belong to this category are Onix [26] and HyperFlow [27]. One potential downside of decentralized control is once more related to the consistency of the network state among controller components. Since the state of the network is distributed, applications served by different controllers might have a different view of the network, which might make them operate improperly. A hybrid solution that tries to combine scalability with consistency is to use two layers of controllers, as the Kandoo [28] controller does. The bottom layer is composed of a group of controllers that have no knowledge of the whole network state. These controllers only run control operations that require knowing the state of a single switch (local information only). On the other hand, the top layer is a logically centralized controller responsible for performing network-wide operations that require knowledge of the whole network state. The idea is that local operations can be performed faster this way and impose no additional load on the high-level central controller, effectively increasing the scalability of the network. Apart from the ideas related to the level of physical centralization of controllers, there have been other proposals related to their logical decentralization. The idea of logical decentralization comes directly from the early era of programmable networks and from the Tempest project. Recall that the Tempest architecture allowed multiple virtual ATM networks to operate on top of the same set of physical switches. 
Similarly, there have been proposals for SDN proxy controllers like FlowVisor [29], which allow multiple controllers to share the same forwarding plane. The motivation for this idea was to enable the simultaneous deployment of experimental and enterprise networks over the same infrastructure without affecting one another. Before concluding our discussion on the degree of centralization of SDN controllers, it is important to examine the concerns that can be raised regarding their performance and applicability in large networking environments. One of the most frequent concerns raised by SDN skeptics is the ability of SDN networks to scale and remain responsive under high network load. This concern comes mainly from the fact that in the new paradigm control moves out of the network devices and into a single entity responsible for managing the whole network traffic. Motivated by this concern, performance studies of SDN controller implementations [30] have revealed that even physically centralized controllers can perform very well, with low response times. For instance, it has been shown that even primitive single-threaded controllers like NOX can handle an average workload of up to 200 thousand new flows per second with a maximum latency of 600 ms for networks composed of up to 256 switches. Newer multi-threaded controller implementations have been shown to perform significantly better. For instance, NOX-MT [31] can handle 1.6 million new flows per second in a 256-switch network with an average response time of 2 ms on a commodity eight-core machine with 2 GHz CPUs. Newer controller designs targeting large industrial servers promise to improve performance even further. For instance, the McNettle [32] controller claims to be able to serve networks of up to 5000 switches using a single 46-core controller with a throughput of over 14 million flows per second and latency under 10 ms. 
Another important performance concern raised in the case of a physically decentralized control plane is the way that controllers are placed within the network, as the network performance can be greatly affected by the number and the physical location of controllers, as well as by the algorithms used for their coordination. To address this, various solutions have been proposed, from viewing the placement of controllers as an optimization problem [33] to establishing connections between this problem and the fields of local algorithms and distributed computing in order to develop efficient controller coordination protocols [34]. A final concern raised in the case of physically distributed SDN controllers is related to the consistency of the network state maintained at each controller when performing policy updates, due to concurrency issues that might arise from the error-prone, distributed nature of the logical controller. Solutions to this problem can be similar to those used in transactional databases, with the controller being extended with a transactional interface defining semantics for either completely committing a policy update or aborting it [35].
3.3.3.2 Management of traffic
Another very important design issue of SDN controllers is the way that traffic is managed. Decisions about traffic management can have a direct impact on the performance of the network, especially in cases of large networks composed of many switches and carrying high traffic loads. We can divide the problems related to traffic management into two categories: control granularity and policy enforcement.
Control granularity
The control granularity applied over network traffic refers to how fine- or coarse-grained the controller inspection operations should be in relation to the packets traversing the network [12]. 
In conventional networks, each packet arriving at a switch is examined individually and a routing decision is made as to where the packet should be forwarded depending on the information it carries (e.g., destination address). While this approach generally works for conventional networks, the same cannot be said for SDN. In this case the per-packet approach becomes infeasible to implement across any sizeable network, since all packets would have to pass through the controller, which would need to construct a route for each one of them individually. Due to the performance issues raised by the per-packet approach, most SDN controllers follow a flow-based approach, where each packet is assigned to some flow according to a specific property (e.g., the packet's source and destination addresses and the application it is associated with). The controller sets up a new flow by examining the first packet arriving for that flow and configuring the switches accordingly. To further offload the controller, an even coarser-grained approach would be to enforce control based on an aggregated flow match instead of using individual flows. The main tradeoff when choosing the level of granularity is the load on the controller versus the quality of service (QoS) offered to network applications. The more fine-grained the control, the higher the QoS. In the per-packet approach the controller can always make the best decision for routing each individual packet, therefore leading to improved QoS. At the other extreme, enforcing control over an aggregation of flows means that the controller's forwarding decisions do not fully adapt to the state of the network. In this case packets might be forwarded through a suboptimal route, leading to degraded QoS.
Policy enforcement
The second issue in the management of traffic is related to the way that network policies are applied by the controller over network devices [12]. 
One approach, followed by systems like Ethane, is to have a reactive control model, where the switching device consults the controller every time a decision for a new flow needs to be made. In this case, the policy for each flow is installed on the switches only when an actual demand arises, making network management more flexible. A potential downside of this approach is the degradation of performance due to the time required for the first packet of the flow to go to the controller for inspection. This performance drop could be significant, especially for controllers physically located far away from the switch. An alternative policy enforcement approach would be to use a proactive control model. In this case the controller populates the flow tables ahead of time for any traffic that could go through the switches and then pushes the rules to all the switches of the network. Using this approach, a switch no longer has to request directions from the controller to set up a new flow and instead can perform a simple lookup in the table already stored in the TCAM of the device. The advantage of proactive control is that it eliminates the latency induced by consulting the controller for every flow.
3.3.4 SDN Programming Interfaces
As already mentioned, the communication of the controller with the other layers is achieved through a southbound API for the controller-switch interactions and through a northbound API for the controller-application interactions. In this section, we briefly discuss the main concepts and issues related to SDN programming by separately examining each point of communication.
3.3.4.1 Southbound communication
The southbound communication is very important for the manipulation of the behavior of SDN switches by the controller. It is the way that SDN attempts to "program" the network. The most prominent example of a standardized southbound API is OpenFlow [1]. 
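Rule installation is precisely what travels over the southbound interface, so before looking at OpenFlow in detail it is worth sketching the reactive and proactive models described above. The `Switch` and `Controller` classes here are hypothetical stand-ins for illustration, not a real controller API.

```python
class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = {}          # match key -> action
        self.controller_queries = 0   # how often we had to ask the controller

    def handle_packet(self, match, controller):
        action = self.flow_table.get(match)
        if action is None:                   # table miss: the reactive path
            self.controller_queries += 1     # first packet goes to the controller
            action = controller.decide(match)
            self.flow_table[match] = action  # cache the rule for later packets
        return action

class Controller:
    def decide(self, match):
        return "forward:port2"               # placeholder policy decision

    def push_proactive(self, switches, rules):
        for sw in switches:                  # proactive: preinstall before traffic
            sw.flow_table.update(rules)

ctrl = Controller()
reactive_sw, proactive_sw = Switch("r"), Switch("p")
ctrl.push_proactive([proactive_sw], {("10.0.0.1", 80): "forward:port2"})

for _ in range(3):                           # three packets of the same flow
    reactive_sw.handle_packet(("10.0.0.1", 80), ctrl)
    proactive_sw.handle_packet(("10.0.0.1", 80), ctrl)

print(reactive_sw.controller_queries)   # 1: only the first packet went up
print(proactive_sw.controller_queries)  # 0: the rule was already installed
```

The sketch makes the tradeoff concrete: the reactive switch pays a controller round trip on the first packet of each flow, while the proactive switch answers every packet locally at the cost of rules being installed for traffic that may never arrive.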
Most projects related to SDN assume that the communication of the controller with the switches is OpenFlow-based, and it is therefore important to present the OpenFlow approach in some detail. However, it should be made clear that OpenFlow is just one (rather popular) out of many possible implementations of controller-switch interaction. Alternatives such as DevoFlow [21] also exist, attempting to solve performance issues that OpenFlow faces.
Overview of OpenFlow
Following the SDN principle of decoupling the control and data planes, OpenFlow provides a standardized way of managing traffic in switches and of exchanging information between the switches and the controller, as Figure 3.2 illustrates. The OpenFlow switch is composed of two logical components. The first component contains one or more flow tables responsible for maintaining the information required by the switch in order to forward packets. The second component is an OpenFlow client, which is essentially a simple API allowing the communication of the switch with the controller.
Figure 3.2 Design of an OpenFlow switch and communication with the controller
The flow tables consist of flow entries, each of which defines a set of rules determining how the packets belonging to that particular flow will be managed by the switch (i.e., how they will be processed and forwarded). Each entry in the flow table has three fields: i) a packet header defining the flow, ii) an action determining how the packet should be processed, and iii) statistics, which keep track of information like the number of packets and bytes of each flow and the time since a packet of the flow was last forwarded. Once a packet arrives at the OpenFlow switch, its header is examined and matched against the packet header fields of the flow entries. If a matching flow is found, the action defined in the action field is performed. 
These actions include forwarding the packet to a particular port in order to be routed through the network, forwarding the packet to the controller for examination, or dropping the packet. If the packet cannot be matched to any flow, it is treated according to the action defined in a table-miss flow entry. The exchange of information between the switch and the controller happens by sending messages through a secure channel in a standardized way defined by the OpenFlow protocol. This way, the controller can manipulate the flows found in the flow table of the switch (i.e., add, update or delete a flow entry) either proactively or reactively, as discussed in the basic controller principles. Since the controller is able to communicate with the switch using the OpenFlow protocol, there is no longer a need for network operators to interact directly with the switch. A particularly compelling feature of OpenFlow is that the packet header field can contain wildcards, meaning that the match against the header of the packet does not have to be exact. The idea behind this approach is that various network devices like routers, switches and middleboxes have similar forwarding behavior, differing only in which header fields they use for matching and in the actions they perform. OpenFlow allows the use of any subset of these header fields for applying rules on traffic flows, meaning that it conceptually unifies many different types of network devices. For instance, a router could be emulated by a flow entry whose packet header matches only on the destination IP address, while a firewall would be emulated through a packet header field containing additional information like the source and destination IP addresses and port numbers, as well as the transport protocol employed. 
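The flow-entry structure just described (wildcardable header, action, statistics, plus a table-miss entry) can be sketched as follows. Field names follow the text rather than the exact OpenFlow wire format, and the "more specific entries win" ordering is a simplification of OpenFlow's explicit rule priorities.

```python
ANY = None  # wildcard: the field is not used for matching

flow_table = [
    # emulating a router: match on the destination IP address only
    {"header": {"dst_ip": "10.0.0.9", "dst_port": ANY}, "action": "output:1",
     "stats": {"packets": 0, "bytes": 0}},
    # emulating a firewall: match on additional header fields as well
    {"header": {"dst_ip": "10.0.0.9", "dst_port": 22}, "action": "drop",
     "stats": {"packets": 0, "bytes": 0}},
]
table_miss = {"action": "send-to-controller"}

def process(pkt):
    # Check more specific entries (fewer wildcards) first; a real switch
    # uses explicit per-entry priorities instead of this heuristic.
    for entry in sorted(flow_table,
                        key=lambda e: -sum(v is not ANY for v in e["header"].values())):
        if all(v is ANY or pkt.get(k) == v for k, v in entry["header"].items()):
            entry["stats"]["packets"] += 1        # update the statistics field
            entry["stats"]["bytes"] += pkt["size"]
            return entry["action"]
    return table_miss["action"]                   # no match: table-miss entry decides

print(process({"dst_ip": "10.0.0.9", "dst_port": 22, "size": 64}))    # drop
print(process({"dst_ip": "10.0.0.9", "dst_port": 80, "size": 1500}))  # output:1
print(process({"dst_ip": "10.0.0.3", "dst_port": 80, "size": 60}))    # send-to-controller
```

Note how the same lookup machinery emulates both a "router" and a "firewall" purely by wildcarding different header fields, which is the unification argument made above.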
3.3.4.2 Northbound API
As already discussed, one of the basic ideas advocated in the SDN paradigm is the existence of a network operating system lying between the network infrastructure and the high-level services and applications, similarly to how a computer operating system lies between the hardware and the user space. Assuming such a centralized coordination entity and following basic operating system principles, a clearly defined interface should also exist in the SDN architecture for the interaction of the controller with applications. This interface should allow applications to access the underlying hardware, manage system resources and interact with other applications, all without requiring any knowledge of low-level network information. In contrast to the southbound communication, where the interactions between the switches and the controller are well defined through a standardized open interface (i.e., OpenFlow), there is currently no accepted standard for the interaction of the controller with applications [12]. Therefore, each controller model needs to provide its own methods for controller-application communication. Moreover, even the interfaces that current controllers implement provide very low-level abstractions (i.e., flow manipulation), which makes it difficult to implement applications with different and often conflicting objectives that are based on higher-level concepts. As an example, consider a power management application and a firewall application. The power management application needs to re-route traffic using as few links as possible in order to deactivate idle switches, while the firewall might need these extra switches to route traffic in the way that best fits the firewall rules. Leaving the programmer to deal with such conflicts could become a very complex and cumbersome process. 
To solve this problem many ideas have been proposed, advocating the use of high-level network programming languages responsible for translating policies into low-level flow constraints, which in turn are used by the controller to manage the SDN switches. These network programming languages can also be seen as an intermediate layer in the SDN architecture, placed between the application layer and the controller, much as high-level programming languages like C++ and Python sit on top of assembly language, hiding its complex low-level details from the programmer. Some examples of such high-level network programming languages include Frenetic [38] and Pyretic [39].
3.3.5 SDN Application Domains
In order to demonstrate the applicability of SDN in a wide range of networking domains, we briefly present two characteristic examples in which SDN could prove beneficial: data centers and cellular networks. Of course, the list of SDN applications is not limited to these domains but extends to many others, from enterprise networks, WLANs and heterogeneous networks to optical networks and the Internet of Things [6][12].
3.3.5.1 Data Center Networks
One of the most important requirements for data center networks is to find ways to scale in order to support hundreds of thousands of servers and millions of virtual machines. However, achieving such scalability can be a challenging task from a network perspective. First of all, the size of forwarding tables increases along with the number of servers, leading to a requirement for more sophisticated and expensive forwarding devices. Moreover, traffic management and policy enforcement become very important and critical issues, since data centers are expected to continuously achieve high levels of performance. In traditional data centers the aforementioned requirements are typically met through the careful design and configuration of the underlying network. 
This operation is in most cases performed manually by defining the preferred routes for traffic and by placing middleboxes at strategic choke points in the physical network. Obviously, this approach contradicts the requirement for scalability, since manual configuration can become a very challenging and error-prone task, especially as the size of the network grows. Additionally, it becomes increasingly difficult to make the data center operate at its full capacity, since it cannot dynamically adapt to application requirements. The advantages that SDN offers to network management fill these gaps. By decoupling the control from the data plane, forwarding devices become much simpler and therefore cheaper. At the same time, all control logic is delegated to one logically centralized entity. This allows the dynamic management of flows, the load balancing of traffic and the allocation of resources in a manner that best adjusts the operation of the data center to the needs of running applications, which in turn leads to increased performance [36]. Finally, placing middleboxes in the network is no longer required, since policy enforcement can now be achieved through the controller entity.
3.3.5.2 Cellular Networks
The market of cellular mobile networks is perhaps one of the most profitable in telecommunications. The rapid increase in the number of cellular devices (e.g., smartphones and tablets) during the past decade has pushed existing cellular networks to their limits. Recently, there has been significant interest in integrating the SDN principles into current cellular architectures like the 3G Universal Mobile Telecommunications System (UMTS) and the 4G Long Term Evolution (LTE) [37]. One of the main disadvantages of current cellular network architectures is that the core of the network has a centralized data flow, with all traffic passing through specialized equipment, which packs multiple network functions, from routing to access control and billing (e.g. 
packet gateway in LTE), leading to an increase in infrastructure cost due to the complexity of the devices and raising serious scalability concerns. Moreover, cell sizes in the access network tend to get smaller in order to meet the demands of ever-increasing traffic over the limited wireless spectrum. However, this leads to increased interference among neighboring base stations and to load fluctuating from one base station to another due to user mobility, rendering the static allocation of resources no longer adequate. Applying the SDN principles to cellular networks promises to address some of these deficiencies. First of all, decoupling the control from the data plane and introducing a centralized controller that has a complete view of the whole network allows network equipment to become simpler and therefore reduces the overall infrastructure cost. Moreover, operations like routing, real-time monitoring, mobility management, access control and policy enforcement can be assigned to different cooperating controllers, making the network more flexible and easier to manage. Furthermore, using a centralized controller acting as an abstract base station simplifies load and interference management, no longer requiring direct communication and coordination among base stations. Instead, the controller makes the decisions for the whole network and simply instructs the data plane (i.e., the base stations) on how to operate. One final advantage is that the use of SDN eases the introduction of virtual operators to the telecommunications market, leading to increased competition. By virtualizing the underlying switching equipment, all providers become responsible for managing the flows of their own subscribers through their own controllers, without having to pay large sums for their own infrastructure. 
3.3.6 Relation of SDN to network virtualization and NFV
Two very popular technologies closely related to SDN are network virtualization and Network Functions Virtualization (NFV). In this subsection we briefly attempt to clarify their relationship to SDN, since these technologies tend to become a source of confusion, especially for those recently introduced to the concept of SDN. Network virtualization is the separation of the network topology from the underlying physical infrastructure. Through virtualization it is possible to have multiple 'virtual' networks deployed over the same physical equipment, with each of them having a much simpler topology compared to that of the physical network. This abstraction allows network operators to construct networks as they see fit without having to tamper with the underlying infrastructure, which can be a difficult or even impossible process. For instance, through network virtualization it becomes possible to have a virtual local area network (VLAN) of hosts spanning multiple physical networks, or to have multiple VLANs on top of a single physical subnet. The idea behind network virtualization of decoupling the network from the underlying physical infrastructure bears resemblance to the idea advocated by SDN of decoupling the control from the data plane, and therefore naturally becomes a source of confusion. The truth is that neither of the two technologies depends on the other. The existence of SDN does not readily imply network virtualization. Similarly, SDN is not necessarily a prerequisite for achieving network virtualization. Rather, it is possible to deploy a network virtualization solution over an SDN network, while at the same time an SDN network could be deployed in a virtualized environment. Since its appearance, SDN has closely coexisted with network virtualization, which acted as one of the first and perhaps the most important use cases of SDN. 
The reason is that the architectural flexibility offered by SDN acted as an enabler for network virtualization. In other words, network virtualization can be seen as a solution focusing on a particular problem, while SDN is one (perhaps the best at this moment) architecture for achieving it. However, as already stressed, network virtualization needs to be seen independently from SDN. In fact, it has been argued by many that network virtualization could turn out to be an even bigger technological innovation than SDN [7]. Another technology that is closely related to but different from SDN is Network Functions Virtualization (NFV) [40]. NFV is a carrier-driven initiative with the goal of transforming the way that operators architect networks, by employing virtualization technologies to virtualize network functions such as intrusion detection, caching, domain name service (DNS) and network address translation (NAT) so that they can run in software. Through the introduction of virtualization it is possible to run these functions on generic industry-standard high-volume servers, switches and storage devices instead of proprietary purpose-built network devices. This approach reduces operational and deployment costs, since operators no longer need to rely on expensive proprietary hardware solutions. Finally, flexibility in network management increases, as it becomes possible to quickly modify existing services or introduce new ones to address changing demands. The decoupling of network functions from the underlying hardware is closely related to the decoupling of the control from the data plane advocated by SDN, and therefore the distinction between the two technologies can be a bit vague. It is important to understand that even though closely related, SDN and NFV refer to different domains. NFV is complementary to SDN but does not depend on it, and vice versa. For instance, the control functions of SDN could be implemented as virtual functions based on the NFV technology. 
On the other hand, an NFV orchestration system could control the forwarding behavior of physical switches through SDN. Neither technology is a requirement for the operation of the other, but both can benefit from the advantages each offers.

3.4 Impact of SDN on Research and Industry

Having seen the basic concepts of SDN and some important applications of this approach, it is now time to briefly discuss the impact of SDN on the research community and the industry. While the focus of each interested party might differ, from designing novel solutions exploiting the benefits of SDN to developing SDN-enabled products ready to be deployed in commercial environments, their involvement in the evolution of SDN helps shape the future of this technology. Examining the motivation and focus of current SDN-related efforts provides indications of what will potentially drive future research in this field.

3.4.1 Overview of Standardization Activities and SDN Summits

Recently, several standardization organizations have started focusing on SDN, each working on standardized solutions for a different part of the SDN space. The benefits of such efforts are very significant, since standardization is the first step towards the wide adoption of a technology. The most relevant standardization organization for SDN is considered to be the Open Networking Foundation (ONF) [41], a non-profit industry consortium founded in 2011. It has more than 100 member companies, including telecom operators, network and service providers, equipment vendors, and networking and virtualization software suppliers. Its vision is to make SDN the new norm for networks by transforming the networking industry into a software industry through open SDN standards.
To achieve this, it attempts to standardize and commercialize SDN and its underlying technologies; its main accomplishment is the standardization of the OpenFlow protocol, which is also the first SDN standard. ONF has a number of working groups covering different aspects of SDN, from forwarding abstractions, extensibility, configuration and management to educating the community on the SDN value proposition. The Internet Engineering Task Force (IETF), a major driving force in developing and promoting Internet standards, also has a number of working groups focusing on SDN in a broader scope than just OpenFlow. The Software-Defined Networking Research Group (SDNRG) [42] focuses on identifying solutions related to the scalability and applicability of the SDN model, as well as on developing abstractions and programming languages useful in the context of SDN. It also attempts to identify SDN use cases and future research challenges. Taking a different approach, the Interface to the Routing System (I2RS) [43] working group is developing an SDN strategy, in contrast to the OpenFlow approach, in which traditional distributed routing protocols continue to run on network hardware while providing information to a centrally located manager. Other SDN-related IETF working groups include ALTO [44], for application-layer traffic optimization using SDN, and CDNI [45], studying how SDN can be used for Content Delivery Network (CDN) interconnection. Some study groups (SGs) of ITU’s Telecommunication Standardization Sector (ITU-T) [46] are also looking into SDN for public telecommunication networks. For instance, Study Group 13 (SG13) is focusing on a framework for telecom SDN and on defining requirements for formal specification and verification methods for SDN. Study Group 11 (SG11) is developing requirements and architectures for SDN signaling, while Study Group 15 (SG15) has started discussions on transport SDN.
Other standardization organizations that have shown interest in applying SDN principles include the Optical Internetworking Forum (OIF) [47], the Broadband Forum (BBF) [48] and the Metro Ethernet Forum (MEF) [49]. OIF promotes the development and deployment of interoperable optical networking systems, and it supports a working group defining the requirements for a transport network SDN architecture. BBF is a forum for fixed-line broadband access and core networks, working on a cloud-based gateway that could be implemented using SDN concepts. Finally, MEF aims to develop, promote and certify technical specifications for Carrier Ethernet services; one of its directions is to investigate whether MEF services could fit within an ONF SDN framework. Apart from the work on standardizing SDN solutions, a number of summits exist for sharing and exploring new ideas and key developments in the SDN research community. The Open Networking Summit (ONS) is perhaps the most important SDN event, with the stated mission “to help the SDN revolution succeed by producing high-impact SDN events”. Other SDN-related venues have also started emerging, such as the SDN & NFV Summit for solutions on network virtualization, the SDN & OpenFlow World Congress, the SIGCOMM workshop on Hot Topics in Software Defined Networking (HotSDN), and the IEEE/IFIP International Workshop on SDN Management and Orchestration (SDNMO).

3.4.2 SDN in the Industry

The advantages that SDN offers over traditional networking have also made the industry focus on SDN, either as a means to simplify management and improve services in their own private networks, or in order to develop and provide commercial SDN solutions. Perhaps one of the most characteristic examples of SDN adoption in production networks is Google, which entered the world of SDN with its B4 network [50], developed for connecting its data centers worldwide.
The main reason for moving to the SDN paradigm, as explained by Google engineers, was the very fast growth of Google’s back-end network. While computational power and storage become cheaper as scale increases, the same cannot be said for the network. By applying SDN principles the company was able to choose networking hardware according to the features it required, while developing its own innovative software solutions. Moreover, centralized network control made the network more efficient and fault-tolerant, provided a more flexible environment for innovation, and at the same time reduced operational expenses. More recently, Google revealed Andromeda [51], a software-defined network underlying its cloud, aimed at enabling Google’s services to scale better, cheaper and faster. Other major companies in the field of networking and cloud services, like Facebook and Amazon, are also planning to build their next-generation network infrastructure on SDN principles. Networking companies have also started showing interest in developing commercial SDN solutions. This interest is not limited to developing specific products like OpenFlow switches and network operating systems; rather, there is a trend toward creating complete SDN ecosystems targeting different types of customers. For instance, companies like Cisco, HP and Alcatel have entered the SDN market, presenting their own complete solutions intended for enterprises and cloud service providers, while telecommunication companies like Huawei are designing solutions for the next generation of telecom networks, with a specific interest in LTE and LTE-Advanced networks. In 2012, VMware acquired an SDN startup called Nicira in order to integrate Nicira’s Network Virtualization Platform (NVP) into NSX, VMware’s own network virtualization and security platform for software-defined data centers.
The list of major companies providing SDN solutions constantly grows, with many others like Broadcom, Oracle, NTT, Juniper and Big Switch Networks recognizing the benefits of SDN and proposing their own solutions.
Publication date: 2014